Autoencoder Node Saliency: Selecting Relevant Latent Representations

Author

  • Ya-Ju Fan
Abstract

The autoencoder is an artificial neural network that learns hidden representations of unlabeled data. With a linear transfer function it is similar to principal component analysis (PCA). While both methods use weight vectors for linear transformations, the autoencoder does not provide an analogue of the eigenvalues that PCA pairs with its eigenvectors. We propose a novel supervised node saliency (SNS) method that ranks the hidden nodes, which contain the weight vectors for the transformations. SNS is able to indicate the nodes specialized in a learning task. The latent representations of a hidden node can be described using a one-dimensional histogram. We apply normalized entropy difference (NED) to measure the "interestingness" of the histograms, and derive a property of NED values that identifies a good classifying node. By applying our methods to real datasets, we demonstrate their ability to find valuable nodes and explain the learned tasks in autoencoders.


Related articles

On Nonparametric Guidance for Learning Autoencoder Representations

Unsupervised discovery of latent representations, in addition to being useful for density modeling, visualisation and exploratory data analysis, is also increasingly important for learning features relevant to discriminative tasks. Autoencoders, in particular, have proven to be an effective way to learn latent codes that reflect meaningful variations in data. A continuing challenge, however, is...


Saliency Cognition of Urban Monuments Based on Verbal Descriptions of Mental-Spatial Representations (Case Study: Urban Monuments in Qazvin)

Urban monuments encompass a wide range of architectural works either intentionally or unintentionally. These works are often salient due to their inherently explicit or hidden components and qualities in the urban context. Therefore, they affect the mental-spatial representations of the environment and make the city legible. However, the ambiguity of effective components often complicates their...


AAANE: Attention-based Adversarial Autoencoder for Multi-scale Network Embedding

Network embedding represents nodes in a continuous vector space and preserves structure information from the network. Existing methods usually adopt a "one-size-fits-all" approach when concerning multi-scale structure information, such as first- and second-order proximity of nodes, ignoring the fact that different scales play different roles in the embedding learning. In this paper, we propose an...


Stick-breaking Variational Autoencoders

We extend Stochastic Gradient Variational Bayes to perform posterior inference for the weights of Stick-Breaking processes. This development allows us to define a Stick-Breaking Variational Autoencoder (SB-VAE), a Bayesian nonparametric version of the variational autoencoder that has a latent representation with stochastic dimensionality. We experimentally demonstrate that the SB-VAE, and a sem...


The Variational Fair Autoencoder

We investigate the problem of learning representations that are invariant to certain nuisance or sensitive factors of variation in the data while retaining as much of the remaining information as possible. Our model is based on a variational autoencoding architecture (Kingma & Welling, 2014; Rezende et al., 2014) with priors that encourage independence between sensitive and latent factors of va...



Journal:
  • CoRR

Volume: abs/1711.07871  Issue: -

Pages: -

Publication year: 2017